Update dependency accelerate to v1.11.0 #213
Open
This PR contains the following updates:
accelerate: `==1.10.0` -> `==1.11.0`

Warning: Some dependencies could not be looked up. Check the warning logs for more information.
Release Notes
huggingface/accelerate (accelerate)
v1.11.0: TE MXFP8, FP16/BF16 with MPS, Python 3.10 (Compare Source)
TE MXFP8 support
We've added support for MXFP8 in our TransformerEngine integration. To use it, set `use_mxfp8_block_scaling` in `fp8_config`. See the NVIDIA docs [here](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/examples/fp8_primer.html#MXFP8-and-block-scaling).
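A minimal sketch of how this might be enabled from Python. The release notes only name the `fp8_config` key `use_mxfp8_block_scaling`; exposing it as a `TERecipeKwargs` field is an assumption here:

```python
from accelerate import Accelerator
from accelerate.utils import TERecipeKwargs

# Assumption: use_mxfp8_block_scaling is accepted by TERecipeKwargs,
# mirroring the fp8_config entry mentioned in these notes.
te_kwargs = TERecipeKwargs(use_mxfp8_block_scaling=True)
accelerator = Accelerator(mixed_precision="fp8", kwargs_handlers=[te_kwargs])
```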
FP16/BF16 Training for MPS devices

BF16 and FP16 support for MPS devices is finally here. You can now pass `mixed_precision="fp16"` or `"bf16"` when training on a Mac (`fp16` requires torch 2.8 and `bf16` requires torch 2.6).
FSDP updates

The following PRs add support for `ignored_params` and `no_sync()`, respectively, for FSDPv2 (see the sketch below):

Mixed precision can now be passed as a dtype string from the accelerate CLI flag or `fsdp_config` in the accelerate config file:
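A sketch of both changes together, assuming the `FullyShardedDataParallelPlugin` field names `fsdp_version` and `mixed_precision_policy`:

```python
from accelerate import Accelerator
from accelerate.utils import FullyShardedDataParallelPlugin

# Assumption: mixed_precision_policy now accepts a plain dtype string,
# as described above, instead of a full policy object.
plugin = FullyShardedDataParallelPlugin(fsdp_version=2, mixed_precision_policy="bf16")
accelerator = Accelerator(fsdp_plugin=plugin)

# no_sync() now works with FSDPv2: skip gradient sync on accumulation steps, e.g.
#   with accelerator.no_sync(model):
#       loss.backward()
```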
Nd-parallel updates

Some minor updates concerning nd-parallelism.
Bump to Python 3.10
We've dropped support for Python 3.9 as it reached end of life in October.
Lots of minor fixes:
- … `cpu` and offloaded to `meta` by @Qubitium in #3796
- … within `Accelerator.autocast()` instead of `__enter__()` and `__exit__()` for more elegant style by @EquationWalker in #3767 (see the sketch after this list)
- … `SWANLAB_MODE` by @SunMarc in #3808
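For reference, a small sketch of the context-manager usage that the `Accelerator.autocast()` fix concerns; user-facing usage is unchanged:

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(4, 4).to(accelerator.device)
batch = torch.randn(2, 4, device=accelerator.device)

# autocast() is used as a context manager; the fix routes this through
# `with` rather than explicit __enter__()/__exit__() calls internally.
with accelerator.autocast():
    output = model(batch)
```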
New Contributors

Full Changelog: huggingface/accelerate@v1.10.1...v1.11.0
v1.10.1: Patch fix (Compare Source)
Full Changelog: huggingface/accelerate@v1.10.0...v1.10.1
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
To execute skipped test pipelines, write the comment `/ok-to-test`.

This PR has been generated by MintMaker (powered by Renovate Bot).